104 research outputs found

    Visual-Vestibular Feedback for Enhanced Situational Awareness in Teleoperation of UAVs

    This paper presents a novel concept for improving the situational awareness of a ground operator in the remote control of an Unmanned Aerial Vehicle (UAV). To this end, we propose to integrate vestibular feedback with the usual visual feedback obtained from a UAV onboard camera. We use our motion platform, the CyberMotion simulator, to reproduce the desired motion cues online. We test this architecture by flying a small-scale quadcopter and run a detailed performance evaluation on 12 test subjects. We then discuss the results in terms of possible benefits for facilitating the remote control task.

    Robotic assembly of complex planar parts: An experimental evaluation

    In this paper we present an experimental evaluation of automatic robotic assembly of complex planar parts. The torque-controlled DLR light-weight robot, equipped with an onboard camera (eye-in-hand configuration), is tasked with looking for given parts on a table, picking them up, and inserting them into the corresponding holes on a movable plate. Visual servoing techniques are used for fine positioning over the selected part/hole, while insertion relies on active compliance control of the robot and robust assembly planning to align the parts automatically with the hole. Execution of the complete task is validated through extensive experiments, and the performance of humans and of the robot is compared in terms of overall execution time.

    Acceleration-level control of the CyberCarpet

    The CyberCarpet is an actuated platform that allows unconstrained locomotion of a walking user for VR exploration. The platform has two actuating devices (linear and angular), and the motion control problem is dual to that of nonholonomic wheeled mobile robots. The main control objective is to keep the walker close to the platform center. We first recall global kinematic control schemes developed at the velocity level, i.e., with the linear and angular velocities of the platform as input commands. Then, we use backstepping techniques and the theory of cascaded systems to move the design to control laws at the acceleration level. Acceleration control is better suited to take into account the limitations imposed on the platform motion by the actuation system and/or the physiological bounds on the human walker. In particular, the availability of platform accelerations allows the analytical computation of the apparent accelerations felt by the user.
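    As an illustration of the velocity-level idea, the following is a minimal sketch of a feedback law that pulls the walker back toward the platform center. The proportional laws and gains are illustrative only, not the control schemes actually derived in the paper.

```python
import math

def velocity_level_control(x, y, k_v=1.0, k_w=2.0):
    """Illustrative velocity-level commands for a walker at (x, y)
    relative to the platform center: linear velocity proportional to
    the radial offset, angular velocity proportional to the walker's
    bearing, steering the offset toward the platform axis."""
    r = math.hypot(x, y)        # radial distance from platform center
    phi = math.atan2(y, x)      # bearing of the walker
    v = k_v * r                 # counter-translate proportionally to offset
    w = k_w * phi               # counter-rotate to align with the axis
    return v, w

# Walker 1 m ahead of center: pure translation command, no rotation
v, w = velocity_level_control(1.0, 0.0)
```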

    Roll rate thresholds and perceived realism in driving simulation

    Due to limited operational space, it is common practice in dynamic driving simulators to implement motion cueing algorithms that tilt the simulator cabin to reproduce sustained accelerations. In order to avoid conflicting inertial cues, the tilt rate is kept below drivers’ perceptual thresholds, which are typically derived from the results of classical vestibular research in which additional sensory cues to self-motion are removed. Here we conduct two experiments to assess whether higher tilt limits can be employed to expand the user’s perceptual workspace in dynamic driving simulators. In the first experiment we measure detection thresholds for roll under conditions that closely resemble typical driving. In the second experiment we measure drivers’ perceived realism in slalom driving for sub-, near- and supra-threshold roll rates. Results show that the detection threshold for roll in an active driving task is remarkably higher than the limits currently used in the motion cueing algorithms of driving simulators. Supra-threshold roll rates in the slalom task are also rated as more realistic. Overall, our findings suggest that higher tilt limits can be successfully implemented in motion cueing algorithms to better exploit the simulator’s operational space.
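    The tilt-coordination principle discussed above can be sketched as a rate-limited roll command: the cabin roll angle that reproduces a sustained lateral acceleration a_y through gravity is arcsin(a_y/g), with the roll rate clipped to a perceptual threshold. All parameter values below (sample rate, rate limit, acceleration) are illustrative, not the paper's experimental values.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def tilt_coordination(a_lat, theta_prev, dt, rate_limit_deg_s):
    """Rate-limited roll angle (rad) that reproduces a sustained
    lateral acceleration a_lat (m/s^2) via gravity alignment."""
    target = math.asin(max(-1.0, min(1.0, a_lat / G)))
    max_step = math.radians(rate_limit_deg_s) * dt   # per-sample rate limit
    step = max(-max_step, min(max_step, target - theta_prev))
    return theta_prev + step

# Example: 2 m/s^2 sustained lateral acceleration, 3 deg/s rate limit;
# after 1 s at 100 Hz the roll has ramped by at most 3 degrees
theta = 0.0
for _ in range(100):
    theta = tilt_coordination(2.0, theta, 0.01, 3.0)
```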

    A novel framework for closed-loop robotic motion simulation - Part II: motion cueing design and experimental validation

    This paper, divided into two parts, considers the problem of realizing a 6-DOF closed-loop motion simulator by exploiting an anthropomorphic serial manipulator as the motion platform. Having proposed a suitable inverse kinematics scheme in Part I [1], we address here the other key issue, i.e., devising a motion cueing algorithm tailored to the specific robot motion envelope. An extension of the well-known classical washout filter, designed in cylindrical coordinates, provides an effective solution to this problem. The paper then presents a thorough experimental evaluation of the overall architecture (inverse kinematics + motion cueing) on the chosen scenario: closed-loop simulation of a Formula 1 racing car. This proves the feasibility of our approach in fully exploiting the robot’s motion capabilities as a motion simulator.
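    The washout principle behind such cueing algorithms can be illustrated with a first-order high-pass filter on the commanded acceleration: onset transients pass through to the platform while sustained components decay, returning the simulator toward its neutral pose. The discrete recursion and time constant below are a generic sketch, not the paper's cylindrical-coordinate design.

```python
def washout_highpass(samples, dt, tau):
    """First-order discrete high-pass filter: passes onsets and
    washes out sustained input with time constant tau (seconds)."""
    alpha = tau / (tau + dt)
    y, x_prev, out = 0.0, samples[0], []
    for x in samples:
        y = alpha * (y + x - x_prev)   # standard discrete HPF recursion
        x_prev = x
        out.append(y)
    return out

# A step input of 1 m/s^2: the onset is transmitted almost fully,
# then the sustained part washes out toward zero
step = [0.0] * 10 + [1.0] * 490
filtered = washout_highpass(step, dt=0.01, tau=0.5)
```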

    Using Associative Schemes (Clusters) in Biology Lessons

    In this paper we propose and experimentally validate a bilateral teleoperation framework in which a group of UAVs is controlled over an unreliable network with typical intercontinental time delays and packet losses. This setting is meant to represent a realistic and challenging situation for the stability of the bilateral closed-loop system. In order to increase human telepresence, the system provides the operator with both a video stream coming from the onboard cameras mounted on the UAVs and a suitable haptic cue, generated by a force-feedback device, informative of the UAV tracking performance and of the presence of impediments at the remote site. In addition to the theoretical background, we describe the hardware and software implementation of this intercontinental teleoperation: it is composed of a semi-autonomous group of multiple quadrotor UAVs, a 3-DOF haptic interface, and a network connection based on a VPN tunnel between Germany and South Korea. The whole software framework is based upon the Robot Operating System (ROS) communication standard.
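    A common way to render such a tracking-performance cue is a saturated spring-damper force on the UAV group's position and velocity error. The sketch below is illustrative only: the gains, the saturation limit, and the function name are assumptions, not the controller actually used in the paper.

```python
def haptic_cue(pos_err, vel_err, k=20.0, b=2.0, f_max=5.0):
    """Spring-damper force cue (N) proportional to the UAV tracking
    error, saturated to the device's maximum renderable force."""
    f = k * pos_err + b * vel_err
    return max(-f_max, min(f_max, f))

# Small tracking error -> proportional cue; large error -> saturated cue
soft = haptic_cue(0.1, 0.0)
hard = haptic_cue(1.0, 0.0)
```

Saturation matters in practice: a 3-DOF desktop haptic device can render only a few newtons, so large tracking errors must clip rather than command forces the hardware cannot produce.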

    Interactions between Humans and Autonomous Robots: Human-Robot Interaction by Shared Autonomy

    Robots acquire information from their environment, process it, and use it to perform tasks autonomously. A major challenge lies in the collaboration of humans and robots in everyday life. To realize this vision, robot design and control must be tailored to human needs and to interaction with humans. Guided by these principles, the scientists' goal is to realize semi-autonomous robot systems (shared control scenario) that, guided by humans, are able to take over limited tasks independently.